Neural compression offers a domain-agnostic approach to creating codecs for lossy or lossless compression via deep generative models. For sequence compression, however, most deep sequence models have costs that scale with the sequence length rather than the sequence complexity. In this work, we instead treat data sequences as observations from an underlying continuous-time process and learn how to efficiently discretize while retaining information about the full sequence. As a consequence of decoupling sequential information from its temporal discretization, our approach allows for higher compression rates and lower computational complexity. Moreover, the continuous-time approach naturally allows us to decode at different time intervals. We empirically verify our approach on multiple domains involving compression of video and motion capture sequences, showing that our approach automatically achieves reductions in bit rate by learning how to discretize.
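To make concrete what decoupling sequential information from its temporal discretization could look like, here is a purely illustrative toy sketch (not the paper's model): an encoder-chosen set of (timestamp, latent) keypoints is decoded on an arbitrary time grid, so the stored bit rate depends on the number of keypoints rather than on the decoding resolution. All names and shapes below are assumptions.

```python
# Purely illustrative toy, not the paper's architecture.
import numpy as np

rng = np.random.default_rng(0)
keypoint_times = np.array([0.0, 0.4, 1.0])     # encoder-chosen temporal discretization
keypoint_latents = rng.normal(size=(3, 8))     # latent code stored for each keypoint

def decode(query_times, times, latents):
    """Interpolate keypoint latents at arbitrary query times (stand-in for a learned decoder)."""
    out = np.empty((len(query_times), latents.shape[1]))
    for j in range(latents.shape[1]):
        out[:, j] = np.interp(query_times, times, latents[:, j])
    return out  # a learned decoder network would map these latents back to frames

fine_grid = np.linspace(0.0, 1.0, 25)          # decode more densely than was stored
reconstructed = decode(fine_grid, keypoint_times, keypoint_latents)
print(reconstructed.shape)                     # (25, 8)
```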
Continuous Normalizing Flows (CNFs) are a class of generative models that transform a prior distribution into a model distribution by solving an ordinary differential equation (ODE). We propose to train CNFs by minimizing the Probability Path Divergence (PPD), a novel family of divergences between the probability density path generated by the CNF and a target probability density path. The PPD is formulated using a logarithmic mass conservation formula, a linear first-order partial differential equation relating the log of the target probability and the vector field defining the CNF. The PPD has several key benefits over existing methods: it avoids the need to solve an ODE at every iteration, readily applies to manifold data, scales to high dimensions, and is compatible with a large family of target paths that interpolate between pure noise and data in finite time. Theoretically, the PPD is shown to bound classical probability divergences. Empirically, we show that CNFs trained by minimizing the PPD achieve state-of-the-art likelihoods and sample quality on existing low-dimensional manifold benchmarks, and provide the first example of a generative model scaling to moderately high-dimensional manifolds.
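A sketch of the log mass conservation relation referred to above, in our own notation and under standard continuity-equation assumptions; the exact norm and weighting used to define the PPD family are assumptions here, not a quotation of the paper.

```latex
% Log mass conservation for a density path p_t generated by vector field v_t
% (standard continuity-equation reasoning; notation ours):
\partial_t p_t(x) + \nabla \cdot \big( p_t(x)\, v_t(x) \big) = 0
\;\;\Longleftrightarrow\;\;
\partial_t \log p_t(x) + \big\langle \nabla \log p_t(x),\, v_t(x) \big\rangle + \nabla \cdot v_t(x) = 0 .

% One plausible reading of a PPD-style objective (the norm/weighting is an assumption):
% plug the target log-density path \log q_t into the PDE with the CNF's field v_t^\theta
% and penalize the residual over space and time.
\mathcal{L}(\theta) \;=\; \int_0^1 \Big\| \, \partial_t \log q_t
  + \big\langle \nabla \log q_t,\, v_t^{\theta} \big\rangle
  + \nabla \cdot v_t^{\theta} \, \Big\|_{L^p(q_t)} \, \mathrm{d}t .
```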
Mapping between discrete and continuous distributions is a difficult task, and many approaches have had to resort to heuristics. We propose a tessellation-based approach that directly learns quantization boundaries in a continuous space, with exact likelihood evaluation. This is done by constructing normalizing flows on convex polytopes using a simple homeomorphism with an efficient log-determinant Jacobian. We explore this approach in two application settings, mapping from discrete to continuous and vice versa. First, a Voronoi dequantization allows quantization boundaries to be learned automatically in a multidimensional space. The locations of the boundaries and the distances between regions can encode useful structural relations between the quantized discrete values. Second, a Voronoi mixture model has constant computational cost for likelihood evaluation regardless of the number of mixture components. Empirically, we show improvements over existing methods across a range of structured data modalities.
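For intuition only, here is a toy sketch of the Voronoi idea: each discrete value owns a learnable anchor point, dequantization samples a continuous point inside its Voronoi cell, and quantization maps back via the nearest anchor. The paper's actual construction is a normalizing flow on convex polytopes with exact likelihood; the anchors, scales, and naive rejection sampler below are illustrative assumptions.

```python
# Illustrative toy, not the paper's flow-based construction.
import numpy as np

rng = np.random.default_rng(0)
anchors = rng.normal(size=(5, 2))  # one learnable anchor per discrete value (5 values, 2-D)

def quantize(x):
    """Map a continuous point to the discrete value whose Voronoi cell contains it."""
    return int(np.argmin(np.linalg.norm(anchors - x, axis=1)))

def dequantize(k, scale=0.5, max_tries=1000):
    """Sample a continuous point inside the Voronoi cell of anchor k (naive rejection)."""
    for _ in range(max_tries):
        x = anchors[k] + scale * rng.normal(size=2)
        if quantize(x) == k:
            return x
    return anchors[k]  # fall back to the anchor itself

x = dequantize(3)
assert quantize(x) == 3  # the round trip stays in the same cell
```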
We are interested in learning generative models for complex geometries described by manifolds, such as spheres, tori, and other implicit surfaces. Current extensions of existing (Euclidean) generative models are restricted to specific geometries and typically suffer from high computational cost. We introduce Moser Flow (MF), a new class of generative models within the family of continuous normalizing flows (CNFs). MF also produces a CNF via a solution; however, unlike other CNF methods, its model (learned) density is parameterized as the source (prior) density minus the divergence of a neural network (NN). The divergence is a local, linear differential operator that is easy to approximate and compute on manifolds. Therefore, unlike other CNFs, MF does not require invoking or backpropagating through an ODE solver during training. Furthermore, representing the model density explicitly as the divergence of an NN, rather than as the solution of an ODE, facilitates learning high-fidelity densities. Theoretically, we prove that MF constitutes a universal density approximator under suitable assumptions. Empirically, we demonstrate for the first time the use of flow models for sampling from general curved surfaces, and we achieve significant improvements in density estimation, sample quality, and training complexity over existing methods on challenging geometries and real-world benchmarks from the earth and climate sciences.
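In our own notation, the density parameterization described above can be sketched as follows, where ν is the source (prior) density on a closed manifold M and u_θ is a neural vector field on M; the normalization argument via the divergence theorem is a standard observation, not a quotation of the paper.

```latex
% Sketch of the divergence-based density parameterization (notation ours):
\mu_\theta(x) \;=\; \nu(x) \;-\; \operatorname{div}\big(u_\theta\big)(x),
\qquad
\int_{\mathcal{M}} \operatorname{div}\big(u_\theta\big)\, \mathrm{d}V = 0
\;\;\Rightarrow\;\;
\int_{\mathcal{M}} \mu_\theta \, \mathrm{d}V \;=\; \int_{\mathcal{M}} \nu \, \mathrm{d}V \;=\; 1 .
```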
Relational machine learning studies methods for the statistical analysis of relational, or graph-structured, data. In this paper, we provide a review of how such statistical models can be "trained" on large knowledge graphs, and then used to predict new facts about the world (which is equivalent to predicting new edges in the graph). In particular, we discuss two fundamentally different kinds of statistical relational models, both of which can scale to massive datasets. The first is based on latent feature models such as tensor factorization and multiway neural networks. The second is based on mining observable patterns in the graph. We also show how to combine these latent and observable models to get improved modeling power at decreased computational cost. Finally, we discuss how such statistical models of graphs can be combined with text-based information extraction methods for automatically constructing knowledge graphs from the Web. To this end, we also discuss Google's Knowledge Vault project as an example of such combination.
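As a concrete (assumed) instance of the latent feature models mentioned above, the following minimal sketch scores knowledge-graph triples with a bilinear, RESCAL/DistMult-style tensor factorization; the entities, relation, and hyperparameters are made up for illustration.

```python
# Minimal latent feature model for knowledge-graph triples (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
entities = {"Paris": 0, "France": 1, "Berlin": 2}
relations = {"capital_of": 0}

d = 16                                                   # latent dimension
E = rng.normal(scale=0.1, size=(len(entities), d))       # entity embeddings
R = rng.normal(scale=0.1, size=(len(relations), d, d))   # one relation matrix per relation

def score(subj, rel, obj):
    """Plausibility of the triple (subj, rel, obj): e_s^T W_r e_o."""
    return float(E[entities[subj]] @ R[relations[rel]] @ E[entities[obj]])

# Training would push observed triples above corrupted ones (margin or cross-entropy loss);
# after training, high scores on unseen triples predict new edges in the graph.
print(score("Paris", "capital_of", "France"))
```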
This paper extends quantile factor analysis to a probabilistic variant that incorporates regularization and computationally efficient variational approximations. By means of synthetic and real data experiments it is established that the proposed estimator can achieve, in many cases, better accuracy than a recently proposed loss-based estimator. We contribute to the literature on measuring uncertainty by extracting new indexes of low, medium and high economic policy uncertainty, using the probabilistic quantile factor methodology. Medium and high indexes have clear contractionary effects, while the low index is benign for the economy, showing that not all manifestations of uncertainty are the same.
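For readers unfamiliar with quantile factor analysis, the structure being estimated can be sketched as below, following the common quantile factor model; the notation is ours, and the statement that the probabilistic variant replaces the check-loss minimization with an asymmetric-Laplace working likelihood plus priors is an assumption about the setup, not a quotation of the paper.

```latex
% Quantile factor model at quantile level \tau (notation assumed):
Q_{\tau}\!\left(x_{it} \,\middle|\, f_t(\tau)\right) \;=\; \lambda_i(\tau)^{\top} f_t(\tau),
\qquad
\widehat{\Lambda},\,\widehat{F} \;=\; \arg\min_{\Lambda,\,F}\;
\sum_{i,t} \rho_{\tau}\!\left(x_{it} - \lambda_i^{\top} f_t\right),
\quad
\rho_{\tau}(u) = u\,\big(\tau - \mathbf{1}\{u < 0\}\big).
```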
We introduce organism networks, which function like a single neural network but are composed of several neural particle networks; while each particle network fulfils the role of a single weight application within the organism network, it is also trained to self-replicate its own weights. As organism networks feature vastly more parameters than simpler architectures, we perform our initial experiments on an arithmetic task as well as on simplified MNIST-dataset classification as a collective. We observe that individual particle networks tend to specialise in one of the two tasks, and that those fully specialised in the secondary task may be dropped from the network without hindering the computational accuracy of the primary task. This leads to the discovery of a novel pruning strategy for sparse neural networks.
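A heavily simplified, assumption-laden sketch of what a self-replication objective for a single particle network might look like; this is not the authors' architecture, and the weight encoding and network sizes are arbitrary.

```python
# Toy self-replication loss: a tiny "particle" network is trained so that, fed an
# encoding of its own weights, it reproduces them (illustrative sketch only).
import torch
import torch.nn as nn

torch.manual_seed(0)
particle = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.Adam(particle.parameters(), lr=1e-3)

def own_weights_as_batch(net):
    """Encode every scalar weight as (positional index, value); targets are the weights themselves."""
    flat = torch.cat([p.detach().flatten() for p in net.parameters()])
    idx = torch.linspace(-1, 1, len(flat))
    inputs = torch.stack([idx, flat], dim=1)      # (num_weights, 2)
    return inputs, flat.unsqueeze(1)              # (num_weights, 1)

for step in range(200):
    inputs, targets = own_weights_as_batch(particle)
    loss = nn.functional.mse_loss(particle(inputs), targets)  # self-replication loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```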
Overfitting is a problem in Convolutional Neural Networks (CNNs) that causes poor generalization of models on unseen data. To remedy this problem, many new and diverse data augmentation (DA) methods have been proposed to supplement or generate more training data and thereby increase its quality. In this work, we propose a new data augmentation algorithm: VoronoiPatches (VP). We primarily utilize non-linear recombination of information within an image, fragmenting and occluding small information patches. Unlike other DA methods, VP uses small convex polygon-shaped patches in a random layout to transport information around within an image. Sudden transitions created between patches and the original image can, optionally, be smoothed. In our experiments, VP outperformed current DA methods regarding model variance and overfitting tendencies. We demonstrate that data augmentation utilizing non-linear recombination of information within images, together with non-orthogonal shapes and structures, improves CNN model robustness on unseen data.
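The following is an illustrative approximation of the VP idea rather than the official implementation: the image is partitioned into Voronoi cells around random seed points, and a few cells are filled with pixels transported from elsewhere in the same image; seed counts, cell counts, and offsets are arbitrary assumptions.

```python
# Illustrative Voronoi-cell patch transport within an image (not the official VP code).
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)   # stand-in for a real image

h, w = img.shape[:2]
seeds = rng.integers(0, [h, w], size=(24, 2))                  # Voronoi seed points
yy, xx = np.mgrid[0:h, 0:w]
dist = (yy[..., None] - seeds[:, 0])**2 + (xx[..., None] - seeds[:, 1])**2
cell = dist.argmin(axis=-1)                                    # per-pixel nearest seed id

augmented = img.copy()
for cid in rng.choice(len(seeds), size=4, replace=False):      # move 4 random cells
    mask = cell == cid
    dy, dx = rng.integers(-16, 17, size=2)                     # random transport offset
    src = np.roll(img, shift=(dy, dx), axis=(0, 1))            # shifted copy of the image
    augmented[mask] = src[mask]                                 # paste transported pixels into the cell
```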
An optimal delivery of arguments is key to persuasion in any debate, both for humans and for AI systems. This requires the use of clear and fluent claims relevant to the given debate. Prior work has studied the automatic assessment of argument quality extensively. Yet, no approach so far actually improves the quality. Our work is the first step towards filling this gap. We propose the task of claim optimization: rewriting argumentative claims to optimize their delivery. As an initial approach, we first generate a candidate set of optimized claims using a sequence-to-sequence model, such as BART, while taking into account contextual information. Our key idea is then to rerank the generated candidates with respect to different quality metrics to find the best optimization. In automatic and human evaluation, we outperform different reranking baselines on an English corpus, improving 60% of all claims (and worsening only 16%). Follow-up analyses reveal that, beyond copy editing, our approach often specifies claims with details, whereas it adds less evidence than humans do. Moreover, its capabilities generalize well to other domains, such as instructional texts.
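A hedged sketch of the generate-then-rerank pipeline described above: the checkpoint name is a placeholder (the paper fine-tunes its own seq2seq model), and quality_score below is a trivial stand-in for the learned argument-quality metrics actually used for reranking.

```python
# Generate candidate rewrites with a seq2seq model, then rerank by a quality score (sketch).
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-base")       # placeholder checkpoint
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

claim = "School uniforms is good because everyone look same."
inputs = tok(claim, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=8, num_return_sequences=8, max_length=48)
candidates = [tok.decode(o, skip_special_tokens=True) for o in outputs]

def quality_score(text: str) -> float:
    """Placeholder for learned quality metrics (fluency, clarity, argumentative strength)."""
    words = text.split()
    return -abs(len(words) - 12)   # toy heuristic: prefer concise, claim-length outputs

best = max(candidates, key=quality_score)   # rerank candidates, keep the best optimization
print(best)
```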
We consider distributed learning in the presence of slow and unresponsive worker nodes, referred to as stragglers. In order to mitigate the effect of stragglers, gradient coding redundantly assigns partial computations to the workers such that the overall result can be recovered from the non-straggling workers alone. Gradient codes are designed to tolerate a fixed number of stragglers. Since the number of stragglers in practice is random and unknown a priori, tolerating a fixed number of stragglers can yield a sub-optimal computation load and can result in higher latency. We propose a gradient coding scheme that can tolerate a flexible number of stragglers by carefully concatenating gradient codes for different straggler tolerances. By proper task scheduling and small additional signaling, our scheme adapts the computation load of the workers to the actual number of stragglers. We analyze the latency of our proposed scheme and show that it achieves significantly lower latency than gradient codes designed for a fixed number of stragglers.
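For background, here is a small sketch of the classic fixed-tolerance fractional-repetition gradient code that such schemes build on; the paper's flexible concatenation and task scheduling are not shown, and the worker counts and straggler set are arbitrary. With n workers and tolerance s, every gradient partition is replicated s+1 times, so any n - s responses suffice to recover the full gradient.

```python
# Fractional-repetition gradient coding sketch: groups of s+1 workers redundantly compute
# the same partial-gradient sum, so any s stragglers can be ignored.
import numpy as np

rng = np.random.default_rng(0)
n, s, d = 6, 2, 4                        # workers, tolerated stragglers, gradient dimension
partial_grads = rng.normal(size=(n, d))  # one partial gradient per data partition

groups = [list(range(g, g + s + 1)) for g in range(0, n, s + 1)]  # worker/partition groups

def worker_output(worker_id):
    """Each worker in a group redundantly computes the sum over its group's partitions."""
    group = next(g for g in groups if worker_id in g)
    return partial_grads[group].sum(axis=0)

stragglers = {1, 2}                      # any set of at most s slow workers
recovered = np.zeros(d)
for group in groups:
    responder = next(w for w in group if w not in stragglers)  # at least one survivor per group
    recovered += worker_output(responder)

assert np.allclose(recovered, partial_grads.sum(axis=0))       # full gradient recovered
```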